Abstract

Background: Systematic literature reviews (SLRs) are foundational for synthesizing evidence across diverse fields and are especially important in guiding research and practice in health and biomedical sciences. However, they are labor intensive due to manual data extraction from multiple studies. As large language models (LLMs) gain attention for their potential to automate research tasks and extract basic information, understanding their ability to accurately extract explicit data from academic papers is critical for advancing SLRs.

Objective: Our study aimed to explore the capability of LLMs to extract both explicitly outlined study characteristics and deeper, more contextual information requiring nuanced evaluations, using ChatGPT (GPT-4).

Methods: We screened the full text of a sample of COVID-19 modeling studies and analyzed three basic measures of study settings (ie, analysis location, modeling approach, and analyzed interventions) and three complex measures of behavioral components in models (ie, mobility, risk perception, and compliance). To extract data on these measures, two researchers independently extracted 60 data elements using manual coding and compared them with ChatGPT's responses to 420 queries spanning 7 iterations.

Results: ChatGPT's accuracy improved as prompts were refined, showing improvements of 33% and 23% between the initial and final iterations for extracting study settings and behavioral components, respectively. With the initial prompts, 26 (43.3%) of 60 ChatGPT responses were correct. In the final iteration, ChatGPT correctly extracted 43 (71.7%) of the 60 data elements, performing better on explicitly stated study settings (28/30, 93.3%) than on subjective behavioral components (15/30, 50%). Nonetheless, the varying accuracy across measures highlighted its limitations.

Conclusions: Our findings underscore LLMs' utility in extracting basic as well as explicit data in SLRs when effective prompts are used. However, the results reveal significant limitations in handling nuanced, subjective criteria, emphasizing the necessity for human oversight.
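The abstract describes iteratively refined prompts sent to GPT-4 to pull explicitly stated study settings out of full-text papers. As a minimal sketch of how such a query could be issued programmatically (assuming the OpenAI Python client; the prompt wording, field list, and model name here are illustrative assumptions, not the authors' actual prompts), one might write:

# Illustrative sketch only: prompt text, field names, and model choice are
# assumptions for demonstration, not the study's actual prompts or settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_study_settings(full_text: str) -> str:
    """Ask the model to report three explicitly stated study settings."""
    prompt = (
        "From the study text below, report (1) the analysis location, "
        "(2) the modeling approach, and (3) the interventions analyzed. "
        "Quote the paper where possible and answer 'not reported' if a "
        "setting is absent.\n\n" + full_text
    )
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output makes comparison with manual coding easier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

Responses gathered this way could then be compared element by element against the researchers' manual coding, mirroring the accuracy tallies reported above.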
-
Perforated microelectrode arrays (pMEAs) have become essential tools for ex vivo retinal electrophysiological studies. pMEAs increase the nutrient supply to the explant and alleviate the accentuated curvature of the retina, allowing for long-term culture and intimate contact between the retina and electrodes for electrophysiological measurements. However, commercial pMEAs are not compatible with in situ high-resolution optical imaging and lack the capability of controlling the local microenvironment, which are highly desirable features for relating function to anatomy and for probing physiological and pathological mechanisms in the retina. Here we report on microfluidic pMEAs (μpMEAs) that combine transparent graphene electrodes with the capability of locally delivering chemical stimulation. We demonstrate the potential of μpMEAs by measuring the electrical response of ganglion cells to locally delivered high K+ stimulation under controlled microenvironments. Importantly, the capability for high-resolution confocal imaging of the retinal tissue on top of the graphene electrodes allows for further analysis of the electrical signal source. The new capabilities provided by μpMEAs could allow retinal electrophysiology assays to address key questions in retinal circuitry studies.